Motion-Informed Deep Learning for Brain MR Image Reconstruction Framework
Chen, Zhifeng, Pawar, Kamlesh, Islam, Kh Tohidul, Peiris, Himashi, Egan, Gary, Chen, Zhaolin
Motion artifacts in Magnetic Resonance Imaging (MRI) are among the most frequently occurring artifacts, caused by patient movement during scanning. Motion is estimated to be present in approximately 30% of clinical MRI scans; however, motion has not been explicitly modeled within deep learning image reconstruction models. Deep learning (DL) algorithms have been demonstrated to be effective for both the image reconstruction task and the motion correction task, but the two tasks have typically been considered separately. The image reconstruction task involves removing undersampling artifacts such as noise and aliasing, whereas motion correction involves removing artifacts including blurring, ghosting, and ringing. In this work, we propose a novel method to simultaneously accelerate imaging and correct motion. This is achieved by integrating a motion module into the deep learning-based MRI reconstruction process, enabling real-time detection and correction of motion. We model motion as a tightly integrated auxiliary layer in the deep learning model during training, making the deep learning model 'motion-informed'. During inference, image reconstruction is performed from undersampled raw k-space data using the trained motion-informed DL model. Experimental results demonstrate that the proposed motion-informed deep learning image reconstruction network outperforms the conventional image reconstruction network on motion-degraded MRI datasets.
- Research Report > New Finding (0.48)
- Research Report > Promising Solution (0.34)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
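A minimal NumPy sketch of what a motion layer like the one described above might do during training: rigid in-plane translation of the object mid-scan is equivalent to multiplying the k-space lines acquired after the movement by a linear phase ramp. The helper name and the per-line translation model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def simulate_motion_kspace(image, shifts_per_line):
    """Corrupt fully sampled k-space with a per-line rigid translation,
    mimicking an auxiliary motion layer applied during training.
    shifts_per_line: one (dy, dx) pixel shift per phase-encode line."""
    k = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = k.shape
    ky = np.fft.fftshift(np.fft.fftfreq(ny))  # cycles/sample along rows
    kx = np.fft.fftshift(np.fft.fftfreq(nx))  # cycles/sample along cols
    k_corrupt = k.copy()
    for row, (dy, dx) in enumerate(shifts_per_line):
        # A rigid shift in image space is a linear phase ramp in k-space
        # (Fourier shift theorem); only the lines acquired while the
        # patient is displaced pick up the ramp.
        phase = np.exp(-2j * np.pi * (ky[row] * dy + kx * dx))
        k_corrupt[row, :] *= phase
    return k_corrupt

# Usage: the second half of the acquisition sees a 3-pixel shift,
# which produces ghosting in the naive reconstruction.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
shifts = [(0.0, 0.0)] * 32 + [(3.0, 0.0)] * 32
k_bad = simulate_motion_kspace(img, shifts)
img_bad = np.abs(np.fft.ifft2(np.fft.ifftshift(k_bad)))
```

Training pairs of (motion-corrupted, clean) data generated this way are what make the downstream reconstruction network 'motion-informed'.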
Attention Hybrid Variational Net for Accelerated MRI Reconstruction
Shen, Guoyao, Hao, Boran, Li, Mengyu, Farris, Chad W., Paschalidis, Ioannis Ch., Anderson, Stephan W., Zhang, Xin
The application of compressed sensing (CS)-enabled data reconstruction for accelerating magnetic resonance imaging (MRI) remains a challenging problem, because the information lost in k-space under the acceleration mask makes it difficult to reconstruct an image matching the quality of a fully sampled one. Multiple deep learning-based structures have been proposed for CS MRI reconstruction, in both the k-space and image domains as well as via unrolled optimization methods. However, these structures do not fully utilize the information from both domains (k-space and image). Herein, we propose a deep learning-based attention hybrid variational network that learns in both the k-space and image domains. We evaluate our method on a well-known open-source MRI dataset and on a clinical MRI dataset of patients diagnosed with strokes from our institution to demonstrate the performance of our network. In addition to quantitative evaluation, we undertook a blinded comparison of image quality across networks, performed by a subspecialty-trained radiologist. Overall, we demonstrate that our network achieves superior performance across multiple reconstruction tasks.
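The glue between the image-domain and k-space-domain stages of any hybrid cascade is a data-consistency step: wherever k-space was actually measured, the network's estimate is overwritten by the measurement. A hedged sketch, not the authors' exact network:

```python
import numpy as np

def data_consistency(image_est, kspace_meas, mask):
    """Enforce measured k-space samples on an image-domain estimate.
    mask is True where k-space was acquired; estimated values survive
    only at the unacquired locations."""
    k_est = np.fft.fft2(image_est)
    k_dc = np.where(mask, kspace_meas, k_est)  # keep measured samples
    return np.fft.ifft2(k_dc)

# Usage: a toy ~25% random sampling mask.
rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
mask = rng.random((32, 32)) < 0.25
k_meas = np.where(mask, np.fft.fft2(img), 0)
est = data_consistency(np.zeros((32, 32)), k_meas, mask)
```

In an unrolled network this step is interleaved with learned denoising blocks in each domain, so neither domain's information is discarded.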
Compressed Sensing MRI Reconstruction Regularized by VAEs with Structured Image Covariance
Duff, Margaret, Simpson, Ivor J. A., Ehrhardt, Matthias J., Campbell, Neill D. F.
Objective: This paper investigates how generative models, trained on ground-truth images, can be used as priors for inverse problems, penalizing reconstructions far from images the generator can produce. The aim is that learned regularization will provide complex data-driven priors to inverse problems while still retaining the control and insight of a variational regularization method. Moreover, unsupervised learning, without paired training data, allows the learned regularizer to remain flexible to changes in the forward problem such as noise level, sampling pattern or coil sensitivities in MRI. Approach: We utilize variational autoencoders (VAEs) that generate not only an image but also a covariance uncertainty matrix for each image. The covariance can model changing uncertainty dependencies caused by structure in the image, such as edges or objects, and provides a new distance metric from the manifold of learned images. Main results: We evaluate these novel generative regularizers on retrospectively sub-sampled real-valued MRI measurements from the fastMRI dataset. We compare our proposed learned regularization against other unlearned regularization approaches and unsupervised and supervised deep learning methods. Significance: Our results show that the proposed method is competitive with other state-of-the-art methods and behaves consistently with changing sampling patterns and noise levels.
- Europe > United Kingdom (0.04)
- North America > Canada > Alberta (0.04)
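The "distance metric from the manifold" described above can be illustrated with a covariance-weighted residual: instead of a plain squared error between a candidate reconstruction and the decoder output, each pixel's residual is scaled by the decoder's predicted uncertainty. The diagonal form below is a simplifying assumption (the paper's structured covariance need not be diagonal), and the function name is hypothetical:

```python
import numpy as np

def generative_regularizer(x, mu, log_var):
    """Covariance-weighted distance of a candidate reconstruction x from
    a VAE decoder output with mean mu and per-pixel log-variance log_var.
    Residuals in regions the model flags as uncertain (e.g. edges)
    are penalized less than residuals in confident flat regions."""
    return float(np.sum((x - mu) ** 2 / np.exp(log_var)))

# Usage: the same residual costs less under higher predicted variance.
x = np.ones(8)
mu = np.zeros(8)
loose = generative_regularizer(x, mu, np.full(8, 1.0))   # uncertain model
tight = generative_regularizer(x, mu, np.full(8, -1.0))  # confident model
```

In the full method this term is added to the data-fidelity objective and minimized jointly over the image and the latent code.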
Dual-Domain Self-Supervised Learning for Accelerated Non-Cartesian MRI Reconstruction
Zhou, Bo, Schlemper, Jo, Dey, Neel, Salehi, Seyed Sadegh Mohseni, Sheth, Kevin, Liu, Chi, Duncan, James S., Sofka, Michal
While enabling accelerated acquisition and improved reconstruction accuracy, current deep MRI reconstruction networks are typically supervised, require fully sampled data, and are limited to Cartesian sampling patterns. These factors limit their practical adoption, as fully sampled MRI is prohibitively time-consuming to acquire clinically. Further, non-Cartesian sampling patterns are particularly desirable as they are more amenable to acceleration and show improved motion robustness. To this end, we present Dual-Domain Self-Supervised learning (DDSS), a fully self-supervised approach for accelerated non-Cartesian MRI reconstruction which leverages self-supervision in both the k-space and image domains. In training, the undersampled data are split into disjoint k-space domain partitions. For the k-space self-supervision, we train a network to reconstruct the input undersampled data from both the disjoint partitions and from itself. For the image-level self-supervision, we enforce appearance consistency between the reconstructions obtained from the original undersampled data and from the two partitions. Experimental results on our simulated multi-coil non-Cartesian MRI dataset demonstrate that DDSS can generate high-quality reconstructions that approach the accuracy of the fully supervised reconstruction, outperforming previous baseline methods. Finally, DDSS is shown to scale to highly challenging real-world clinical MRI reconstruction acquired on a portable low-field (0.064 T) MRI scanner with no data available for supervised training, while demonstrating improved image quality as compared to traditional reconstruction, as determined by a radiologist study.
- North America > United States > Connecticut > New Haven County > New Haven (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Connecticut > New Haven County > Guilford (0.04)
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Nuclear Medicine (0.89)
- Health & Medicine > Health Care Technology (0.89)
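The partitioning step that drives the k-space self-supervision above can be sketched in a few lines: the acquired samples are randomly divided into two disjoint subsets whose union is the original sampling pattern. The helper name is illustrative, and a Cartesian boolean mask stands in for the non-Cartesian trajectory of the actual method:

```python
import numpy as np

def split_kspace_mask(mask, rng):
    """Randomly split an undersampling mask into two disjoint partitions.
    Each acquired location lands in exactly one partition, so the two
    reconstructions are trained on independent subsets of the data."""
    idx = np.flatnonzero(mask)
    pick = rng.random(idx.size) < 0.5
    m1 = np.zeros(mask.size, dtype=bool)
    m2 = np.zeros(mask.size, dtype=bool)
    m1[idx[pick]] = True
    m2[idx[~pick]] = True
    return m1.reshape(mask.shape), m2.reshape(mask.shape)

# Usage: split a ~30%-sampled toy mask into two training partitions.
rng = np.random.default_rng(1)
mask = rng.random((16, 16)) < 0.3
m1, m2 = split_kspace_mask(mask, rng)
```

The network is then asked to predict the samples in one partition from the other, which is possible without any fully sampled reference.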
Iterative training of robust k-space interpolation networks for improved image reconstruction with limited scan specific training samples
Dawood, Peter, Breuer, Felix, Burd, Paul R., Homolya, István, Oberberger, Johannes, Jakob, Peter M., Blaimer, Martin
Purpose: To evaluate an iterative learning approach for enhanced performance of Robust Artificial-neural-networks for K-space Interpolation (RAKI), when only a limited amount of training data (auto-calibration signals, ACS) is available for accelerated standard 2D imaging. Methods: In a first step, the RAKI model was optimized for the case of a strongly limited amount of training data. In the iterative learning approach (termed iterative RAKI), the optimized RAKI model is initially trained using original and augmented ACS obtained from a linear parallel imaging reconstruction. Subsequently, the RAKI convolution filters are refined iteratively using original and augmented ACS extracted from the previous RAKI reconstruction. Evaluation was carried out on 200 retrospectively undersampled in-vivo datasets from the fastMRI neuro database with different contrast settings. Results: For limited training data (18 and 22 ACS lines for R=4 and R=5, respectively), iterative RAKI outperforms standard RAKI by reducing residual artefacts and yields strong noise suppression when compared to standard parallel imaging, underlined by quantitative reconstruction quality metrics. In combination with a phase constraint, further reconstruction improvements can be achieved. Additionally, iterative RAKI shows better performance than both GRAPPA and RAKI in the case of pre-scan calibration with varying contrast between training and undersampled data. Conclusion: The iterative learning approach with RAKI benefits from standard RAKI's well-known noise suppression feature but requires less original training data for the accurate reconstruction of standard 2D images, thereby improving net acceleration.
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.14)
- Europe > Germany > Bavaria > Lower Franconia > Würzburg (0.05)
- North America > United States > Wisconsin > Dane County > Madison (0.04)
- (6 more...)
- Health & Medicine > Therapeutic Area (0.46)
- Health & Medicine > Diagnostic Medicine (0.46)
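The core idea behind RAKI-style k-space interpolation can be shown with its linear ancestor: fit, on the ACS lines only, a kernel that predicts each skipped phase-encode line from its acquired neighbours, then apply it to the undersampled data. The least-squares kernel below (R=2, two taps) is a deliberately simplified stand-in for the small CNNs that RAKI trains; the iterative variant would then extract a larger pseudo-ACS from the reconstruction and refit:

```python
import numpy as np

def fit_weights(acs):
    """Least-squares weights predicting each skipped k-space line from
    its two acquired neighbours (acceleration R=2)."""
    A = np.stack([acs[:-2].ravel(), acs[2:].ravel()], axis=1)
    w, *_ = np.linalg.lstsq(A, acs[1:-1].ravel(), rcond=None)
    return w

def interpolate(kspace_us, w):
    """Fill the odd (skipped) rows from the even (acquired) rows."""
    k = kspace_us.copy()
    k[1:-1:2] = w[0] * k[0:-2:2] + w[1] * k[2::2]
    return k

# Usage: on a toy linear ramp the true kernel is an exact average,
# so calibration on 4 central lines recovers the full array.
full = np.outer(np.arange(10.0), np.ones(4))
acs = full[3:7]                      # central "calibration" lines
us = full.copy()
us[1:-1:2] = 0.0                     # drop every other line
rec = interpolate(us, fit_weights(acs))
```

RAKI replaces the single linear kernel with a small scan-specific network, which is why the amount and quality of ACS data dominates its performance.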
Parallel Weight Consolidation: A Brain Segmentation Case Study
McClure, Patrick, Zheng, Charles, Pereira, Francisco, Kaczmarzyk, Jakub, Rogers-Lee, John, Nielson, Dylan, Bandettini, Peter
Collecting the large datasets needed to train deep neural networks can be very difficult, particularly for the many applications for which sharing and pooling data is complicated by practical, ethical, or legal concerns. However, it may be the case that derivative datasets or predictive models developed within individual sites can be shared and combined with fewer restrictions. Training on distributed datasets and combining the resulting networks is often viewed as continual learning, but these methods require networks to be trained sequentially. In this paper, we introduce parallel weight consolidation (PWC), a continual learning method to consolidate the weights of neural networks trained in parallel on independent datasets. We perform a brain segmentation case study using PWC to consolidate several dilated convolutional neural networks trained in parallel on independent structural magnetic resonance imaging (sMRI) datasets from different sites. We found that PWC led to increased performance on held-out test sets from the different sites, as well as on a very large and completely independent multi-site dataset. This demonstrates the feasibility of PWC for combining the knowledge learned by networks trained on different datasets.
- Health & Medicine > Diagnostic Medicine > Imaging (0.88)
- Health & Medicine > Therapeutic Area > Neurology (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.69)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
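Consolidating networks trained in parallel can be pictured, under a diagonal-Gaussian approximation of each site's weight posterior, as a precision-weighted average: sites whose data constrain a parameter more tightly contribute more to the consensus value. This is one plausible sketch consistent with the Bayesian framing above, not the paper's exact consolidation rule, and the function name is hypothetical:

```python
def consolidate(weights, precisions, prior_w=0.0, prior_p=1.0):
    """Precision-weighted average of per-site parameter estimates under
    a shared Gaussian prior (mean prior_w, precision prior_p).
    Works elementwise on scalars or array-likes supporting * and +."""
    num = prior_p * prior_w + sum(p * w for w, p in zip(weights, precisions))
    den = prior_p + sum(precisions)
    return num / den

# Usage: a site with higher precision (e.g. more data) dominates.
w_merged = consolidate(weights=[0.0, 4.0], precisions=[3.0, 1.0], prior_p=0.0)
```

Because the sites' contributions commute, the merge can happen in any order or all at once, which is what distinguishes this parallel scheme from sequential continual learning.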